Results 1 - 20 of 43
1.
J Vis ; 24(3): 3, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38441884

ABSTRACT

Humans acquire sensory information via fast, highly specialized detectors: For example, edge detectors monitor restricted regions of visual space over timescales of 100-200 ms. Surprisingly, this study demonstrates that their operation is nevertheless shaped by the ecological consistency of slow global statistical structure in the environment. In the experiments, humans acquired feature information from brief localized elements embedded within a virtual environment. Cast shadows are important for determining the appearance and layout of the environment. When the statistical reliability of shadows was manipulated, human feature detectors implicitly adapted to these changes over minutes, adjusting their response properties to emphasize either "image-based" or "object-based" anchoring of local visual elements. More specifically, local visual operators were more firmly anchored around object representations when shadows were reliable. As shadow reliability was reduced, visual operators disengaged from objects and became anchored around image features. These results indicate that the notion of sensory adaptation must be reframed around complex statistical constructs with ecological validity. These constructs far exceed the spatiotemporal selectivity bandwidth of sensory detectors, thus demonstrating the highly integrated nature of sensory processing during natural behavior.


Subject(s)
Sensation , Humans , Reproducibility of Results
2.
Neural Netw ; 152: 244-266, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35567948

ABSTRACT

We assess whether deep convolutional networks (DCN) can account for a most fundamental property of human vision: detection/discrimination of elementary image elements (bars) at different contrast levels. The human visual process can be characterized to varying degrees of "depth," ranging from percentage of correct detection to detailed tuning and operating characteristics of the underlying perceptual mechanism. We challenge deep networks with the same stimuli/tasks used with human observers and apply equivalent characterization of the stimulus-response coupling. In general, we find that popular DCN architectures do not account for signature properties of the human process. For shallow depth of characterization, some variants of network-architecture/training-protocol produce human-like trends; however, more articulate empirical descriptors expose glaring discrepancies. Networks can be coaxed into learning those richer descriptors by shadowing a human surrogate in the form of a tailored circuit perturbed by unstructured input, thus ruling out the possibility that human-model misalignment in standard protocols may be attributable to insufficient representational power. These results urge caution in assessing whether neural networks do or do not capture human behavior: ultimately, our ability to assess "success" in this area can only be as good as afforded by the depth of behavioral characterization against which the network is evaluated. We propose a novel set of metrics/protocols that impose stringent constraints on the evaluation of DCN behavior as an adequate approximation to biological processes.
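The shallowest level of behavioral characterization discussed above (percent correct as a function of contrast) can be sketched with a simulated template-matching observer performing 2AFC bar detection; the bar profile, noise levels, and trial counts below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "bar" signal on a 32-pixel window (parameters illustrative).
x = np.arange(32)
bar = np.exp(-0.5 * ((x - 16) / 2.0) ** 2)   # Gaussian bar profile
bar /= np.linalg.norm(bar)

contrasts = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
n_trials = 2000
sigma_ext = 1.0          # external (stimulus) noise s.d. per pixel
sigma_int = 0.5          # internal noise s.d. on the decision variable

pc = np.empty_like(contrasts)
for i, c in enumerate(contrasts):
    # 2AFC: one interval contains signal + noise, the other noise alone.
    noise_a = rng.normal(0, sigma_ext, (n_trials, x.size))
    noise_b = rng.normal(0, sigma_ext, (n_trials, x.size))
    r_sig = (c * bar + noise_a) @ bar + rng.normal(0, sigma_int, n_trials)
    r_noi = noise_b @ bar + rng.normal(0, sigma_int, n_trials)
    pc[i] = np.mean(r_sig > r_noi)   # proportion correct at this contrast
```

The same stimulus set can be fed to a network in place of the template matcher, which is the sense in which humans and DCNs are challenged with equivalent protocols; richer descriptors (tuning, operating characteristics) require going beyond the percent-correct summary computed here.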


Subject(s)
Learning , Neural Networks, Computer , Humans
3.
J Vis ; 21(5): 9, 2021 05 03.
Article in English | MEDLINE | ID: mdl-33974037

ABSTRACT

Artistic composition (the structural organization of pictorial elements) is often characterized by some basic rules and heuristics, but art history does not offer quantitative tools for segmenting individual elements, measuring their interactions and related operations. To discover whether a metric description of this kind is even possible, we exploit a deep-learning algorithm that attempts to capture the perceptual mechanism underlying composition in humans. We rely on a robust behavioral marker with known relevance to higher-level vision: orientation judgements, that is, telling whether a painting is hung "right-side up." Humans can perform this task, even for abstract paintings. To account for this finding, existing models rely on "meaningful" content or specific image statistics, often in accordance with explicit rules from art theory. Our approach does not commit to any such assumptions/schemes, yet it outperforms previous models and for a larger database, encompassing a wide range of painting styles. Moreover, our model correctly reproduces human performance across several measurements from a new web-based experiment designed to test whole paintings, as well as painting fragments matched to the receptive-field size of different depths in the model. By exploiting this approach, we show that our deep learning model captures relevant characteristics of human orientation perception across styles and granularities. Interestingly, the more abstract the painting, the more our model relies on extended spatial integration of cues, a property supported by deeper layers.


Subject(s)
Deep Learning , Paintings , Cues , Humans , Judgment , Perception
4.
Trends Hear ; 25: 2331216520978029, 2021.
Article in English | MEDLINE | ID: mdl-33620023

ABSTRACT

Spectrotemporal modulations (STM) are essential features of speech signals that make them intelligible. While their encoding has been widely investigated in neurophysiology, we still lack a full understanding of how STMs are processed at the behavioral level and how cochlear hearing loss impacts this processing. Here, we introduce a novel methodological framework based on psychophysical reverse correlation deployed in the modulation space to characterize the mechanisms underlying STM detection in noise. We derive perceptual filters for young normal-hearing and older hearing-impaired individuals performing a detection task of an elementary target STM (a given product of temporal and spectral modulations) embedded in other masking STMs. Analyzed with computational tools, our data show that both groups rely on a comparable linear (band-pass)-nonlinear processing cascade, which can be well accounted for by a temporal modulation filter bank model combined with cross-correlation against the target representation. Our results also suggest that the modulation mistuning observed for the hearing-impaired group results primarily from broader cochlear filters. Yet, we find idiosyncratic behaviors that cannot be captured by cochlear tuning alone, highlighting the need to consider variability originating from additional mechanisms. Overall, this integrated experimental-computational approach offers a principled way to assess suprathreshold processing distortions in each individual and could thus be used to further investigate interindividual differences in speech intelligibility.
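The reverse-correlation logic behind such perceptual filters can be sketched as follows: a simulated observer applies a known filter plus internal noise to random masking energy, and the filter is recovered by contrasting the masks preceding each response class. The grid size, tuning, and noise levels are illustrative assumptions, not the study's modulation space:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical perceptual filter over a small "modulation space" grid
# (spectral x temporal modulation bins); all sizes are illustrative.
true_filter = np.zeros((8, 8))
true_filter[3, 3] = 1.0                       # observer tuned to one STM cell

n_trials = 20000
masks = rng.normal(0, 1, (n_trials, 8, 8))    # random masking STM energy
internal = rng.normal(0, 1, n_trials)         # internal noise
resp = (masks.reshape(n_trials, -1) @ true_filter.ravel() + internal) > 0

# Classification image: mean mask on "yes" trials minus mean on "no" trials.
ci = masks[resp].mean(axis=0) - masks[~resp].mean(axis=0)
peak = np.unravel_index(np.abs(ci).argmax(), ci.shape)
```

The recovered classification image peaks at the cell driving the simulated observer; applied to human data, the same contrast exposes which spectrotemporal modulations the listener actually weights.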


Subject(s)
Hearing Loss, Sensorineural , Speech Perception , Auditory Threshold , Hearing , Hearing Loss, Sensorineural/diagnosis , Humans , Noise/adverse effects , Perceptual Masking
5.
J Comput Neurosci ; 49(1): 1-20, 2021 02.
Article in English | MEDLINE | ID: mdl-33123952

ABSTRACT

The optimal template for signal detection in white additive noise is the signal itself: the ideal observer matches each stimulus against this template and selects the stimulus associated with the largest match. In the noisy ideal observer, internal noise is added to the decision variable returned by the template. While the ideal observer represents an unrealistic approximation to the human visual process, the noisy ideal observer may be applicable under certain experimental conditions. For template values constrained to lie within a specified range, theory predicts that the template associated with a noisy ideal observer should be a clipped image of the signal, a result which we demonstrate analytically using variational calculus. It is currently unknown whether the human process conforms to theory. We report a targeted analysis of the theoretical prediction for an experimental protocol that maximizes template-matching on the part of human participants. We find indicative evidence to support the theoretical expectation when internal noise is compared across participants, but not within each participant. Our results indicate that implicit knowledge about internal variability in different individuals is reflected by their detection templates; no implicit knowledge is retained for internal-noise fluctuations experienced by a given participant during data collection. The results also indicate that template encoding is constrained by the dynamic range of weight specification, rather than the range of output values transduced by the template-matching process.
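The bounded-template prediction can be illustrated numerically: with weights confined to a fixed range and internal noise added after the template stage, detectability peaks for a clipped copy of the signal rather than for the signal itself. All parameter values below are illustrative assumptions:

```python
import numpy as np

# Sketch of the clipped-template result: template weights confined to [-1, 1],
# white external noise per pixel, and fixed-variance internal noise added
# after the template stage (all parameters illustrative).
x = np.arange(32)
signal = np.exp(-0.5 * ((x - 16) / 4.0) ** 2)

sigma_ext = 1.0   # external noise s.d. per pixel
sigma_int = 5.0   # internal noise s.d. on the template output

def dprime(w):
    # Detectability of `signal` for linear template w followed by
    # fixed-variance internal noise.
    return (w @ signal) / np.sqrt(sigma_ext**2 * (w @ w) + sigma_int**2)

# Clip increasingly amplified copies of the signal into the allowed range;
# lam = 1 reproduces the signal itself, large lam approaches a flat template.
lams = np.array([1.0, 2.0, 3.0, 5.0, 10.0, 100.0])
scores = np.array([dprime(np.clip(lam * signal, -1, 1)) for lam in lams])
best = int(scores.argmax())   # an intermediate clipping level wins
```

Detectability is maximized at an intermediate clipping level: the clipped template raises the overall gain (beating internal noise) while still rejecting pixels that carry little signal, consistent with the variational result stated in the abstract.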


Subject(s)
Models, Neurological , Signal Detection, Psychological , Humans
6.
Anim Cogn ; 23(1): 41-53, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31586279

ABSTRACT

We currently have limited knowledge about complex visual representations in teleosts. For the specific case of Siamese fighting fish (Betta splendens), we do not know whether they can represent much more than mere colour or size. In this study, we assess their visual capabilities using increasingly complex stimulus manipulations akin to those adopted in human psychophysical studies of higher-level perceptual processes, such as face recognition. Our findings demonstrate a surprisingly sophisticated degree of perceptual representation. Consistent with previous work in established teleost models like zebrafish (Danio rerio), we find that fighting fish can integrate different features (e.g. shape and motion) for visually guided behaviour; this integration process, however, operates in a more holistic fashion in the fighting fish. More specifically, their analysis of complex spatiotemporal patterns is primarily global rather than local, meaning that individual stimulus elements must cohere into an organized percept for effective behavioural drive. The configural nature of this perceptual process is reminiscent of how mammals represent socially relevant signals, notwithstanding the lack of cortical structures that are widely recognized to play a critical role in higher cognitive processes. Our results indicate that mammalian-centric accounts of social cognition present serious conceptual limitations, and in so doing they highlight the importance of understanding complex perceptual function from a general ethological perspective.


Subject(s)
Fishes , Social Behavior , Animals , Color
7.
PLoS Comput Biol ; 14(12): e1006585, 2018 12.
Article in English | MEDLINE | ID: mdl-30513091

ABSTRACT

Contrast is the most fundamental property of images. Consequently, any comprehensive model of biological vision must incorporate this attribute and provide a faithful description of its impact on visual perception. Current theoretical and computational models predict that vision should modify its characteristics at low contrast: for example, it should become broader (more lowpass) to protect against noise, as often demonstrated by individual neurons. We find that the opposite is true for human discrimination of elementary image elements: vision becomes sharper, not broader, as contrast approaches threshold levels. Furthermore, it suffers from increased internal variability at low contrast and it transitions from a surprisingly linear regime at high contrast to a markedly nonlinear processing mode in the low-contrast range. These characteristics are hard-wired in that they happen on a single trial without memory or expectation. Overall, the empirical results urge caution when attempting to interpret human vision from the standpoint of optimality and related theoretical constructs. Direct measurements of this phenomenon indicate that the actual constraints derive from intrinsic architectural features, such as the co-existence of complex-cell-like and simple-cell-like components. Small circuits built around these elements can indeed account for the empirical results, but do not appear to operate in a manner that conforms to optimality even approximately. More generally, our results provide a compelling demonstration of how far we still are from securing an adequate computational account of the most basic operations carried out by human vision.


Subject(s)
Pattern Recognition, Visual/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Computational Biology/methods , Computer Simulation , Humans , Models, Neurological , Neurons/physiology , Nonlinear Dynamics
8.
PLoS Biol ; 15(8): e1002611, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28827801

ABSTRACT

The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism.


Subject(s)
Models, Neurological , Neurons/physiology , Retina/physiology , Retinal Neurons/physiology , Vision, Ocular , Visual Cortex/physiology , Visual Perception , Algorithms , Biomarkers/analysis , Brain Mapping , Electroencephalography , Female , Humans , Male , Nerve Net/physiology , Spatial Processing , Visual Fields , Visual Pathways/physiology
9.
Vision Res ; 127: 104-114, 2016 10.
Article in English | MEDLINE | ID: mdl-27491704

ABSTRACT

All sensory devices, whether biological or artificial, carry appreciable amounts of intrinsic noise. When these internally generated perturbations are sufficiently large, the behaviour of the system is not solely driven by the external stimulus but also by its own spontaneous variability. Behavioural internal noise can be quantified, provided it is expressed in relative units of the noise source externally applied by the stimulus. In humans performing sensory tasks at near threshold performance, the size of internal noise is roughly equivalent to the size of the response fluctuations induced by the external noise source. It is not known how the human estimate compares with other animals, because behavioural internal noise has never been measured in other species. We have adapted the methodology used with humans to the zebrafish, a small teleost that displays robust visually-guided behaviour. Our measurements demonstrate that, under some conditions, it is possible to obtain viable estimates of internal noise in this vertebrate species; the estimates generally fall within the human range, suggesting that the properties of internal noise may reflect general constraints on stimulus-response coupling that apply across animal systems with substantially different characteristics.


Subject(s)
Behavior, Animal/physiology , Signal Detection, Psychological/physiology , Visual Perception/physiology , Zebrafish/physiology , Animals , Contrast Sensitivity/physiology , Models, Biological , Photic Stimulation/methods , Sensory Thresholds/physiology
10.
PLoS Comput Biol ; 12(7): e1005019, 2016 07.
Article in English | MEDLINE | ID: mdl-27398600

ABSTRACT

Sound waveforms convey information largely via amplitude modulations (AM). A large body of experimental evidence has provided support for a modulation (bandpass) filterbank. Details of this model have varied over time, partly reflecting different experimental conditions and diverse datasets from distinct task strategies, contributing uncertainty to the bandwidth measurements and leaving important issues unresolved. We adopt here a solely data-driven measurement approach in which we first demonstrate how different models can be subsumed within a common 'cascade' framework, and then proceed to characterize the cascade via system identification analysis using a single stimulus/task specification and hence stable task rules largely unconstrained by any model or parameters. Observers were required to detect a brief change in level superimposed onto random level changes that served as AM noise; the relationship between trial-by-trial noisy fluctuations and corresponding human responses enables targeted identification of distinct cascade elements. The resulting measurements reveal a complex, dynamic picture in which human perception of auditory modulations appears adaptive in nature, evolving from an initially lowpass mode to bandpass modes (with broad tuning, Q∼1) following repeated stimulus exposure.
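A minimal sketch of one element of such a cascade is a bandpass modulation filter with Q of about 1 applied to an amplitude envelope; the Gaussian transfer function in the modulation-frequency domain is an illustrative choice, not the filter shape fitted in the study:

```python
import numpy as np

fs = 1000.0                       # envelope sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
# Envelope carrying two AM components; only the 4 Hz one falls in the band.
env = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 64 * t)

def mod_bandpass(sig, fc, q, fs):
    # Gaussian bandpass in the modulation-frequency domain, centre fc,
    # quality factor q (bandwidth = fc / q); applied via the FFT.
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    bw = fc / q
    h = np.exp(-0.5 * ((f - fc) / (bw / 2)) ** 2)
    return np.fft.irfft(np.fft.rfft(sig) * h, sig.size)

out = mod_bandpass(env - env.mean(), fc=4.0, q=1.0, fs=fs)
# The 4 Hz component passes; the 64 Hz component is strongly attenuated.
spec = np.abs(np.fft.rfft(out))
```

A filterbank is a collection of such filters with centre frequencies tiling the modulation axis; the cascade framework in the abstract adds a nonlinearity and decision stage downstream of this linear front end.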


Subject(s)
Auditory Pathways/physiology , Auditory Perception/physiology , Task Performance and Analysis , Adult , Computational Biology , Humans , Noise , Young Adult
11.
PLoS Comput Biol ; 11(11): e1004499, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26556758

ABSTRACT

It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the issue of applicability may be addressed by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components.
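The distinction drawn above can be sketched with illustrative parameter values: a linear-nonlinear cascade versus a variant with a small gain-control stage that divides the linear drive by pooled stimulus energy, thereby leaving the linear-nonlinear family:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal linear-nonlinear (LN) cascade: linear template match followed by
# a static nonlinearity. The gain-control variant divides the linear drive
# by pooled stimulus energy (all parameters illustrative).
x = np.arange(32)
template = np.exp(-0.5 * ((x - 16) / 3.0) ** 2)
template /= np.linalg.norm(template)

def ln_response(stim):
    drive = stim @ template               # linear stage
    return np.maximum(drive, 0.0) ** 2    # static expansive nonlinearity

def gain_control_response(stim, sigma=1.0):
    drive = stim @ template
    energy = np.sum(stim**2, axis=-1)     # pooled stimulus energy
    return np.maximum(drive, 0.0) ** 2 / (sigma**2 + energy)

stim = rng.normal(0, 1, (5, 32)) + 2.0 * template   # signal + noise samples
r_ln = ln_response(stim)
r_gc = gain_control_response(stim)
```

For a fixed stimulus ensemble the two models can be nearly indistinguishable after rescaling, which is why attribute-by-attribute tests can each pass an LN description while no single LN implementation fits them all.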


Subject(s)
Models, Neurological , Vision, Ocular/physiology , Visual Cortex/physiology , Computational Biology , Humans , Nonlinear Dynamics
12.
J Neurosci ; 35(5): 1849-57, 2015 Feb 04.
Article in English | MEDLINE | ID: mdl-25653346

ABSTRACT

Autistic traits span a wide spectrum of behavioral departures from typical function. Despite the heterogeneous nature of autism spectrum disorder (ASD), there have been attempts at formulating unified theoretical accounts of the associated impairments in social cognition. A class of prominent theories capitalizes on the link between social interaction and visual perception: effective interaction with others often relies on discrimination of subtle nonverbal cues. It has been proposed that individuals with ASD may rely on poorer perceptual representations of other people's actions as returned by dysfunctional visual circuitry and that this, in turn, may lead to less effective interpretation of those actions for social behavior. It remains unclear whether such perceptual deficits exist in ASD: the evidence currently available is limited to specific aspects of action recognition, and the reported deficits are often attributable to cognitive factors that may not be strictly visual (e.g., attention). We present results from an exhaustive set of measurements spanning the entire action processing hierarchy, from motion detection to action interpretation, designed to factor out effects that are not selectively relevant to this function. Our results demonstrate that the ASD perceptual system returns functionally intact signals for interpreting other people's actions adequately; these signals can be accessed effectively when autistic individuals are prompted and motivated to do so under controlled conditions. However, they may fail to exploit them adequately during real-life social interactions.


Subject(s)
Child Development Disorders, Pervasive/physiopathology , Motion Perception , Adolescent , Case-Control Studies , Cognition , Humans , Male
13.
J Neurosci ; 34(25): 8449-61, 2014 Jun 18.
Article in English | MEDLINE | ID: mdl-24948800

ABSTRACT

Motion detection is a fundamental property of the visual system. The gold standard for studying and understanding this function is the motion energy model. This computational tool relies on spatiotemporally selective filters that capture the change in spatial position over time afforded by moving objects. Although the filters are defined in space-time, their human counterparts have never been studied in their native spatiotemporal space but rather in the corresponding frequency domain. When this frequency description is back-projected to spatiotemporal description, not all characteristics of the underlying process are retained, leaving open the possibility that important properties of human motion detection may have remained unexplored. We derived descriptors of motion detectors in native space-time, and discovered a large unexpected dynamic structure involving a >2× change in detector amplitude over the first ∼100 ms. This property is not predicted by the energy model, generalizes across the visual field, and is robust to adaptation; however, it is silenced by surround inhibition and is contrast dependent. We account for all results by extending the motion energy model to incorporate a small network that supports feedforward spread of activation along the motion trajectory via a simple gain-control circuit.
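The core of the motion energy model can be sketched in one spatial dimension plus time: a quadrature pair of space-time filters tuned to each drift direction, with opponent energy as the directional signal. Filter and stimulus parameters below are illustrative, not those used in the study:

```python
import numpy as np

# Minimal motion-energy sketch (Adelson-Bergen style) on a 1-D space x time
# patch; spatial/temporal frequencies are illustrative, not fitted values.
nx, nt = 32, 32
x = np.arange(nx) - nx / 2
t = np.arange(nt) - nt / 2
X, T = np.meshgrid(x, t, indexing="ij")
win = np.exp(-(X**2) / (2 * 6.0**2) - (T**2) / (2 * 6.0**2))
fx, ft = 0.125, 0.125     # cycles per pixel, cycles per frame

def energy(stim, sign):
    # Quadrature pair of space-time filters tuned to one drift direction;
    # summing squared even and odd outputs gives phase-invariant energy.
    phase = 2 * np.pi * (fx * X + sign * ft * T)
    even, odd = win * np.cos(phase), win * np.sin(phase)
    return np.sum(stim * even) ** 2 + np.sum(stim * odd) ** 2

stim = np.cos(2 * np.pi * (fx * X - ft * T))      # rightward-drifting grating
opponent = energy(stim, -1) - energy(stim, +1)    # rightward minus leftward
```

The dynamic amplitude change reported in the abstract concerns how such detectors behave over the first ~100 ms; the extension proposed there adds feedforward spread along the motion trajectory, which this static sketch does not include.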


Subject(s)
Motion Perception/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Space Perception/physiology , Time Perception/physiology , Female , Humans , Male , Time Factors
14.
J Neurosci ; 34(6): 2374-88, 2014 Feb 05.
Article in English | MEDLINE | ID: mdl-24501376

ABSTRACT

In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.


Subject(s)
Orientation/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Semantics , Visual Cortex/physiology , Discrimination, Psychological/physiology , Humans , Visual Perception/physiology
15.
J Neural Eng ; 10(1): 016014, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23337440

ABSTRACT

OBJECTIVE: Brains, like other physical devices, are inherently noisy. This source of variability is large, to the extent that internal noise often impacts human sensory processing more than externally induced (stimulus-driven) perturbations. Despite the fundamental nature of this phenomenon, its statistical distribution remains unknown: for the past 40 years it has been assumed Gaussian, but the applicability (or lack thereof) of this assumption has not been checked. APPROACH: We obtained detailed measurements of this process by exploiting an integrated approach that combines experimental, theoretical and computational tools from bioengineering applications of system identification and reverse correlation methodologies. MAIN RESULTS: The resulting characterization reveals that the underlying distribution is in fact not Gaussian, but well captured by the Laplace (double-exponential) distribution. SIGNIFICANCE: Potentially relevant to this result is the observation that image contrast follows leptokurtic distributions in natural scenes, suggesting that the properties of internal noise in human sensors may reflect environmental statistics.
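The Gaussian/Laplace distinction is conveniently expressed through excess kurtosis, which is 0 for a Gaussian and 3 for a Laplace (double-exponential) distribution; a quick numerical check, with sample sizes chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000

def excess_kurtosis(v):
    # Fourth standardized moment minus 3 (so a Gaussian scores ~0).
    z = (v - v.mean()) / v.std()
    return np.mean(z**4) - 3.0

gauss = rng.normal(0, 1, n)       # mesokurtic: excess kurtosis near 0
laplace = rng.laplace(0, 1, n)    # leptokurtic: excess kurtosis near 3
k_g, k_l = excess_kurtosis(gauss), excess_kurtosis(laplace)
```

Leptokurtic noise concentrates more mass near zero and in the tails than a Gaussian of equal variance, the same qualitative shape reported for contrast distributions in natural scenes.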


Subject(s)
Biosensing Techniques , Models, Neurological , Photic Stimulation/methods , Statistical Distributions , Synaptic Transmission/physiology , Biosensing Techniques/instrumentation , Biosensing Techniques/methods , Brain/physiology , Brain/physiopathology , Humans , Photic Stimulation/instrumentation , Psychomotor Performance/physiology
16.
Front Psychol ; 3: 348, 2012.
Article in English | MEDLINE | ID: mdl-23015793

ABSTRACT

Lexical decision is one of the most frequently used tasks in word recognition research. Theoretical conclusions are typically derived from a linear model on the reaction times (RTs) of correct word trials only (e.g., linear regression and ANOVA). Although these models estimate random measurement error for RTs, considering only correct trials implicitly assumes that word/non-word categorizations are without noise: words receive a yes-response because they have been recognized, and they receive a no-response when they are not known. Hence, when participants are presented with the same stimuli on two separate occasions, they are expected to give the same response. We demonstrate that this is not true and that responses in a lexical decision task suffer from inconsistency in participants' response choice, meaning that RTs of "correct" word responses include RTs of trials on which participants did not recognize the stimulus. We obtained estimates of this internal noise using established methods from sensory psychophysics (Burgess and Colborne, 1988). The results show noise values similar to those in typical psychophysical signal detection experiments when sensitivity and response bias are taken into account (Neri, 2010). These estimates imply that, with an optimal choice model, only 83-91% of the response choices can be explained (i.e., can be used to derive theoretical conclusions). For word responses, word frequencies below 10 per million yield alarmingly low percentages of consistent responses (near 50%). The same analysis can be applied to RTs, yielding noise estimates about three times higher. Correspondingly, the estimated amount of consistent trial-level variance in RTs is only 8%. These figures are especially relevant given the recent popularity of trial-level lexical decision models using the linear mixed-effects approach (e.g., Baayen et al., 2008).

17.
Biol Cybern ; 106(8-9): 465-82, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22854977

ABSTRACT

Single neurons in auditory cortex display highly selective spectrotemporal properties: their receptive fields modulate over small fractions of an octave and integrate across temporal windows of 100-200 ms. We investigated how these characteristics impact auditory behavior. Human observers were asked to detect a specific sound frequency masked by broadband noise; we adopted an experimental design which required the engagement of frequency-selective mechanisms to perform above chance. We then applied psychophysical reverse correlation to derive spectrotemporal perceptual filters for the assigned task. We were able to expose signatures of neuronal-like spectrotemporal tuning on a scale of 1/10 octave and 50-100 ms, but detailed modeling of our results showed that observers were not able to rely on the explicit output of these channels. Instead, human observers pooled from a large bank of highly selective channels via a weighting envelope poorly tuned for frequency (on a scale of 1.5 octaves) with sluggish temporal dynamics, followed by a highly nonlinear max-like operation. We conclude that human detection of specific frequencies embedded within complex sounds suffers from a high degree of intrinsic spectrotemporal uncertainty, resulting in low efficiency values (<1 %) for this perceptual ability. Signatures of the underlying neural circuitry can be exposed, but there does not appear to be a direct line for accessing individual neural channels on a fine scale.
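The pooling scheme described above can be sketched with invented scales: many narrowly tuned channels read out through a broad weighting envelope followed by a max-like operation, which costs efficiency relative to monitoring the target channel alone:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative sketch: a bank of narrowly tuned frequency channels pooled
# via a broad weighting envelope and a max operation (all scales invented).
n_channels = 60
cf = np.linspace(-3, 3, n_channels)          # channel centres, octaves re: target
target_channel = int(np.argmin(np.abs(cf)))  # channel tuned to the target

envelope = np.exp(-0.5 * (cf / 1.5) ** 2)    # broad (1.5-octave) weighting

def decision(signal_present, n=4000):
    drive = rng.normal(0, 1, (n, n_channels))          # channel noise
    if signal_present:
        drive[:, target_channel] += 2.0                # target excites one channel
    return np.max(envelope * drive, axis=1)            # max-like pooling

d_sig, d_noise = decision(True), decision(False)
hit_rate = np.mean(d_sig > np.median(d_noise))
```

The max over many irrelevant channels injects decision noise that the target signal must overcome, a simple way in which intrinsic spectrotemporal uncertainty drives efficiency down even when the underlying channels are sharply tuned.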


Subject(s)
Auditory Pathways/physiology , Models, Neurological , Neurons/physiology , Pitch Perception/physiology , Acoustic Stimulation , Humans
18.
Vision Res ; 69: 1-9, 2012 Sep 15.
Article in English | MEDLINE | ID: mdl-22835631

ABSTRACT

Human sensory processing is inherently noisy: if a participant is presented with the same set of stimuli multiple times and is asked to perform a task related to some property of the stimulus by pressing one of two buttons, the set of responses generated by the participant will differ on different presentations even though the set of stimuli remained the same. This response variability can be used to estimate the amount of internal noise (i.e. noise that is not present in the stimulus but in the participant's decision making process). The procedure by which the same set of stimuli is presented twice is referred to as double-pass (DP) methodology. This procedure is well-established, but there is no accepted recipe for how the repeated trials may be delivered (e.g. in the same order as they were originally presented, or in a different order); more importantly, it is not known whether the choice of delivery matters to the resulting estimates. Our results show that this factor (as well as feedback) has no measurable impact. We conclude that, for the purpose of estimating internal noise using the DP method, the system can be assumed to have no inter-trial memory.
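The double-pass logic can be sketched as follows, with the decision variable reduced to a single Gaussian number per trial and all noise levels illustrative; agreement between the two passes falls as internal noise grows, and for equal internal and external noise the expected agreement is about 2/3:

```python
import numpy as np

rng = np.random.default_rng(4)

# Double-pass sketch: the same external-noise samples are shown twice; the
# observer's percent agreement between passes constrains the ratio of
# internal to external noise (all parameter values illustrative).
n_trials = 5000
sigma_int = 1.0                    # internal noise, in units of external noise
ext = rng.normal(0, 1, n_trials)   # external decision-variable noise, fixed across passes
resp1 = (ext + sigma_int * rng.normal(0, 1, n_trials)) > 0   # pass 1
resp2 = (ext + sigma_int * rng.normal(0, 1, n_trials)) > 0   # pass 2, fresh internal noise
agreement = np.mean(resp1 == resp2)
```

With no internal noise the responses would repeat exactly (agreement 1); inverting the measured agreement through this forward model yields the internal-to-external noise ratio, independently of the order in which repeated trials are delivered, which is the invariance the abstract reports.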


Subject(s)
Auditory Perception/physiology , Memory/physiology , Visual Perception/physiology , Analysis of Variance , Humans , Signal Detection, Psychological
19.
J Neurophysiol ; 107(5): 1260-74, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22131380

ABSTRACT

Attention is known to affect the response properties of sensory neurons in visual cortex. These effects have been traditionally classified into two categories: 1) changes in the gain (overall amplitude) of the response; and 2) changes in the tuning (selectivity) of the response. We performed an extensive series of behavioral measurements using psychophysical reverse correlation to understand whether/how these neuronal changes are reflected at the level of our perceptual experience. This question has been addressed before, but by different laboratories using different attentional manipulations and stimuli/tasks that are not directly comparable, making it difficult to extract a comprehensive and coherent picture from existing literature. Our results demonstrate that the effect of attention on response gain (not necessarily associated with tuning change) is relatively aspecific: it occurred across all the conditions we tested, including attention directed to a feature orthogonal to the primary feature for the assigned task. Sensory tuning, however, was affected primarily by feature-based attention and only to a limited extent by spatially directed attention, in line with existing evidence from the electrophysiological and behavioral literature.


Subject(s)
Attention/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Visual Cortex/physiology , Visual Perception/physiology , Humans
20.
Front Psychol ; 2: 172, 2011.
Article in English | MEDLINE | ID: mdl-21886631

ABSTRACT

Visual cortex analyzes images by first extracting relevant details (e.g., edges) via a large array of specialized detectors. The resulting edge map is then relayed to a processing pipeline, the final goal of which is to attribute meaning to the scene. As this process unfolds, does the global interpretation of the image affect how local feature detectors operate? We characterized the local properties of human edge detectors while we manipulated the extent to which the statistical properties of the surrounding image conformed to those encountered in natural vision. Although some aspects of local processing were unaffected by contextual manipulations, we observed significant alterations in the operating characteristics of the detector which were solely attributable to a higher-level semantic interpretation of the scene, unrelated to lower-level aspects of image statistics. Our results suggest that it may be inaccurate to regard early feature detectors as operating outside the domain of higher-level vision; although there is validity in this approach, a full understanding of their properties requires the inclusion of knowledge-based effects specific to the statistical regularities found in the natural environment.
